Machine learning is considered one of the most promising applications of quantum computing. Identifying quantum advantage for quantum analogues of machine learning models is therefore a key research goal. Here, we show that variational quantum classifiers (VQC) and support vector machines with quantum kernels (QSVM) can solve a classification problem based on the k-Forrelation problem, which is known to be PromiseBQP-complete. Because the PromiseBQP complexity class includes all bounded-error quantum polynomial-time (BQP) decision problems, our result implies the existence of feature maps and quantum kernels that make VQC and QSVM efficient solvers for any BQP problem. This means that the feature map of a VQC or the quantum kernel of a QSVM can be designed to achieve quantum advantage for any classification problem that cannot be solved classically in polynomial time but can be solved by a quantum computer.
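To make the QSVM construction concrete, here is a minimal sketch of a quantum-kernel SVM pipeline: a feature map embeds each input x into a quantum state |phi(x)>, and the kernel is the squared state overlap K(x, x') = |<phi(x)|phi(x')>|^2. The state is simulated classically with NumPy, and the single-qubit feature map below is a toy illustrative choice, not the k-Forrelation-based construction of the paper.

```python
import numpy as np
from sklearn.svm import SVC

def feature_map(x):
    """Toy single-qubit feature map: |phi(x)> = RZ(x1) RY(x0) |0>."""
    c, s = np.cos(x[0] / 2), np.sin(x[0] / 2)
    state = np.array([c, s], dtype=complex)            # RY(x0)|0>
    phases = np.exp(-1j * x[1] / 2 * np.array([1, -1]))
    return phases * state                              # apply RZ(x1)

def quantum_kernel(X1, X2):
    """Gram matrix of squared overlaps between embedded states."""
    S1 = np.array([feature_map(x) for x in X1])
    S2 = np.array([feature_map(x) for x in X2])
    return np.abs(S1.conj() @ S2.T) ** 2

rng = np.random.default_rng(0)
X = rng.uniform(0, np.pi, size=(40, 2))
y = (X[:, 0] > X[:, 1]).astype(int)                    # synthetic labels

clf = SVC(kernel="precomputed").fit(quantum_kernel(X, X), y)
print(clf.predict(quantum_kernel(X[:5], X)))
```

A feature map with provable quantum advantage would replace `feature_map` with a circuit that is hard to simulate classically; the SVM machinery around it is unchanged.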
While quantum machine learning (ML) has been proposed to be one of the most promising applications of quantum computing, how to build quantum ML models that outperform classical ML remains a major open question. Here, we demonstrate a Bayesian algorithm for constructing quantum kernels for support vector machines that adapts quantum gate sequences to data. The algorithm increases the complexity of quantum circuits incrementally by appending quantum gates, using the Bayesian information criterion as the circuit selection metric and Bayesian optimization to tune the parameters of the locally optimal quantum circuits identified. The performance of the resulting quantum models on classification problems with a small number of training points significantly exceeds that of optimized classical models with conventional kernels.
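The core of this approach is a greedy model-growth loop scored by the Bayesian information criterion, BIC = k ln(n) - 2 ln(L-hat). Below is a minimal sketch of that loop using polynomial regression as a stand-in for quantum circuits (the quantum-kernel details and the Bayesian optimization step are not shown): at each step the model gains one term, and growth stops when BIC no longer improves.

```python
import numpy as np

def gaussian_log_likelihood(y, y_pred):
    """Maximized Gaussian log-likelihood of residuals."""
    n = len(y)
    sigma2 = np.mean((y - y_pred) ** 2)
    return -0.5 * n * (np.log(2 * np.pi * sigma2) + 1.0)

def bic(log_lik, n_params, n):
    return n_params * np.log(n) - 2.0 * log_lik

rng = np.random.default_rng(1)
x = np.linspace(-1, 1, 60)
y = 1.0 + 2.0 * x - 3.0 * x**2 + rng.normal(0, 0.1, x.size)

best_bic, degree = np.inf, 0
while True:  # greedily add one polynomial term at a time
    coeffs = np.polyfit(x, y, degree)
    score = bic(gaussian_log_likelihood(y, np.polyval(coeffs, x)),
                degree + 1, x.size)
    if score >= best_bic:          # stop: no extension improves BIC
        break
    best_bic, degree = score, degree + 1
print("selected degree:", degree - 1)
```

In the circuit-growth setting, "add one polynomial term" corresponds to appending one candidate quantum gate, with the same BIC comparison deciding whether the extension is kept.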
The current stage of the COVID-19 pandemic allows researchers to collect datasets accumulated over two years and use them for predictive analysis. This, in turn, makes it possible to assess the potential efficiency of more complex predictive models, including neural networks, over different forecast horizons. In this paper, we present the results of a consistent comparative study of different types of methods based on regional data from two countries: the USA and Russia. We used well-known statistical methods (e.g., exponential smoothing), a naive "tomorrow-as-today" method, and a suite of classical machine learning models trained on data from individual regions. Alongside them, we considered a neural network model based on long short-term memory (LSTM) layers, whose training set aggregates data from all regions of both countries: the USA and Russia. Efficiency was evaluated by cross-validation using the MAPE metric. The results show that for complicated periods with a sharp increase in the number of confirmed daily cases, the best results are achieved by the LSTM model trained on all regions of both countries, with a mean absolute percentage error (MAPE) of 18%, 30%, and 37% for Russia and 31%, 41%, and 50% for the USA at forecast horizons of 14, 28, and 42 days, respectively.
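For reference, here is a short sketch of the evaluation metric and the naive baseline mentioned above: MAPE compares forecasts against actuals, and the "tomorrow-as-today" baseline simply carries the last observed value forward over the horizon. The case-count series below is synthetic; real regional data would replace it.

```python
import numpy as np

def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

def naive_forecast(history, horizon):
    """'Tomorrow equals today': repeat the last observation."""
    return np.full(horizon, history[-1], dtype=float)

cases = np.array([120, 135, 160, 180, 210, 260, 320], dtype=float)
history, actual = cases[:-3], cases[-3:]          # hold out a 3-day horizon
print(f"MAPE: {mape(actual, naive_forecast(history, 3)):.1f}%")
```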
An impossibility theorem demonstrates that a particular problem or set of problems cannot be solved as described in the claim. Such theorems place limits on what artificial intelligence, and super-intelligent AI in particular, can do. As such, these results serve as guidelines, reminders, and warnings for AI safety, AI policy, and governance researchers. By formalizing theories within a constraint-satisfaction framework, they may enable solutions to some long-standing questions without committing to a single option. In this paper, we categorize impossibility theorems applicable to the domain of AI into five categories: deduction, indistinguishability, induction, tradeoffs, and intractability. We find that certain theorems are too specific or carry implicit assumptions that limit their application. We also add a new result (theorem) on the unfairness of explainability, the first explainability-related result in the induction category. We conclude that deductive impossibilities deny 100% guarantees for security. Finally, we offer some ideas that hold potential for explainability, controllability, value alignment, ethics, and group decision-making; these can be deepened through further investigation.
In this paper we explore the task of modeling (semi-)structured object sequences; in particular, we focus our attention on the problem of developing a structure-aware input representation for such sequences. We assume that each structured object is represented by a set of key-value pairs which encode its attributes. Given a universe of keys, a sequence of structured objects can then be viewed as an evolution of the values for each key over time. We encode and construct a sequential representation using the values for a particular key (Temporal Value Modeling - TVM) and then self-attend over the set of key-conditioned value sequences to create a representation of the structured object sequence (Key Aggregation - KA). We pre-train and fine-tune the two components independently and present an innovative training schedule that interleaves the training of both modules with shared attention heads. We find that this iterative two-part training results in better performance than a unified network with hierarchical encoding, as well as other methods that use a {\em record-view} representation of the sequence \cite{de2021transformers4rec} or a simple {\em flattened} representation of the sequence. We conduct experiments using real-world data to demonstrate the advantage of interleaving TVM-KA on multiple tasks, along with detailed ablation studies motivating our modeling choices. We find that our approach performs better than flattening sequence objects and also allows us to operate on significantly larger sequences than existing methods.
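A condensed, hypothetical sketch of the two components is shown below: TVM encodes the value sequence of each key with a transformer encoder, and KA self-attends over the resulting key-conditioned representations. The dimensions, mean pooling, and module wiring are illustrative choices; the paper's exact architecture and its interleaved training schedule with shared attention heads are not reproduced here.

```python
import torch
import torch.nn as nn

class TVMKA(nn.Module):
    def __init__(self, vocab_size, d_model=64, n_heads=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer_v = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        layer_k = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.tvm = nn.TransformerEncoder(layer_v, num_layers=2)  # per-key values
        self.ka = nn.TransformerEncoder(layer_k, num_layers=1)   # across keys

    def forward(self, values):            # values: (batch, n_keys, seq_len)
        b, k, t = values.shape
        x = self.embed(values.reshape(b * k, t))   # encode each key's
        x = self.tvm(x).mean(dim=1)                # value sequence (TVM)
        x = x.reshape(b, k, -1)
        return self.ka(x).mean(dim=1)              # aggregate keys (KA)

model = TVMKA(vocab_size=100)
tokens = torch.randint(0, 100, (2, 5, 8))   # 2 sequences, 5 keys, 8 steps
print(model(tokens).shape)                  # torch.Size([2, 64])
```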
Optical coherence tomography (OCT) captures cross-sectional data and is used for the screening, monitoring, and treatment planning of retinal diseases. Technological developments to increase the speed of acquisition often result in systems with a narrower spectral bandwidth, and hence a lower axial resolution. Traditionally, image-processing-based techniques have been utilized to reconstruct subsampled OCT data; more recently, deep-learning-based methods have been explored. In this study, we simulate reduced axial scan (A-scan) resolution by Gaussian windowing in the spectral domain and investigate the use of a learning-based approach for image feature reconstruction. In anticipation of the reduced resolution that accompanies wide-field OCT systems, we build upon super-resolution techniques, reconstructing lost features using a pixel-to-pixel approach with a modified super-resolution generative adversarial network (SRGAN) architecture, to better aid clinicians in their decision-making and improve patient outcomes.
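A minimal sketch of the degradation step described above, under simplifying assumptions: the spectrum of an A-scan is multiplied by a Gaussian window, narrowing the effective bandwidth and hence broadening (blurring) the axial point spread function. The A-scan and the `relative_bandwidth` parameterization are synthetic stand-ins; real interferogram data and the study's exact windowing would replace them.

```python
import numpy as np

def gaussian_window_spectrum(a_scan, relative_bandwidth=0.5):
    """Simulate reduced axial resolution via spectral-domain windowing."""
    spectrum = np.fft.fft(a_scan)
    freqs = np.fft.fftfreq(a_scan.size)
    sigma = relative_bandwidth * 0.5   # narrower window -> lower resolution
    window = np.exp(-0.5 * (freqs / sigma) ** 2)
    return np.real(np.fft.ifft(spectrum * window))

a_scan = np.zeros(256)
a_scan[[60, 64, 180]] = 1.0            # three reflectors, two adjacent
blurred = gaussian_window_spectrum(a_scan, relative_bandwidth=0.2)
print(blurred[58:67].round(3))         # the two adjacent peaks now merge
```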
Real-life tools for decision-making in many critical domains are based on ranking results. With the increasing awareness of algorithmic fairness, recent works have presented measures for fairness in ranking. Many of those definitions consider the representation of different ``protected groups'' in the top-$k$ ranked items, for any reasonable $k$. Given the protected groups, confirming algorithmic fairness is a simple task. However, the groups' definitions may be unknown in advance. In this paper, we study the problem of detecting groups with biased representation in the top-$k$ ranked items, eliminating the need to pre-define protected groups. The number of possible groups can be exponential, making the problem hard. We propose efficient search algorithms for two different fairness measures: global representation bounds and proportional representation. We then propose a method to explain the bias in the representation of groups using the notion of Shapley values. We conclude with an experimental study showing the scalability of our approach and demonstrating the usefulness of the proposed algorithms.
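To make the second measure concrete, here is a small sketch of a proportional-representation check for a single candidate group: the group's share among the top-$k$ ranked items is compared against its share in the full ranking. The tolerance and data are illustrative, and the paper's efficient search over the exponential space of candidate groups is not shown.

```python
import numpy as np

def proportional_representation(group_mask, k, tolerance=0.2):
    """Return (top-k share, overall share, fair?) for one candidate group."""
    top_share = group_mask[:k].mean()        # assumes items sorted by rank
    overall_share = group_mask.mean()
    fair = abs(top_share - overall_share) <= tolerance * overall_share
    return top_share, overall_share, fair

# 20 ranked items; True marks membership in a candidate protected group
ranked_membership = np.array([False] * 8 + [True, False] * 6)
print(proportional_representation(ranked_membership, k=10))
# -> (0.1, 0.3, False): the group is under-represented in the top 10
```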
Previous fine-grained datasets mainly focus on classification and are often captured in a controlled setup, with the camera focusing on the objects. We introduce the first Fine-Grained Vehicle Detection (FGVD) dataset in the wild, captured from a moving camera mounted on a car. It contains 5502 scene images with 210 unique fine-grained labels of multiple vehicle types organized in a three-level hierarchy. While previous classification datasets also include makes of different kinds of cars, the FGVD dataset introduces new class labels for categorizing two-wheelers, autorickshaws, and trucks. The FGVD dataset is challenging, as it has vehicles in complex traffic scenarios with intra-class and inter-class variations in type, scale, pose, occlusion, and lighting conditions. Current object detectors such as YOLOv5 and Faster R-CNN perform poorly on our dataset due to a lack of hierarchical modeling. Along with providing baseline results for existing object detectors on the FGVD dataset, we also present the results of combining an existing detector with the recent Hierarchical Residual Network (HRN) classifier for the FGVD task. Finally, we show that FGVD vehicle images are the most challenging to classify among fine-grained datasets.
Recent advances in deep learning have enabled us to address the curse of dimensionality (COD) by solving problems in higher dimensions. A subset of such approaches has enabled the solution of high-dimensional PDEs, opening doors to a variety of real-world problems ranging from mathematical finance to stochastic control for industrial applications. Although feasible, these deep learning methods are still constrained by training time and memory. Tackling these shortcomings, Tensor Neural Networks (TNN) can provide significant parameter savings while attaining the same accuracy as classical Dense Neural Networks (DNN). In addition, we show that a TNN can be trained faster than a DNN of the same accuracy. Besides TNN, we also introduce the Tensor Network Initializer (TNN Init), a weight initialization scheme that leads to faster convergence with smaller variance for an equivalent parameter count compared to a DNN. We benchmark TNN and TNN Init by applying them to solve the parabolic PDE associated with the Heston model, which is widely used in financial pricing theory.
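As a hypothetical illustration of where the parameter savings come from (the paper's TNN architecture and TNN Init scheme are not reproduced here), a dense weight matrix can be replaced by a Kronecker (tensor) product of two small factors, so a 64x64 layer needs 2*8*8 = 128 parameters instead of 4096:

```python
import torch
import torch.nn as nn

class KroneckerLinear(nn.Module):
    """Linear layer whose weight is a Kronecker product of two factors."""
    def __init__(self, m=8, n=8):
        super().__init__()
        self.a = nn.Parameter(torch.randn(m, m) / m ** 0.5)
        self.b = nn.Parameter(torch.randn(n, n) / n ** 0.5)

    def forward(self, x):                     # x: (batch, m*n)
        weight = torch.kron(self.a, self.b)   # implicit (m*n, m*n) matrix
        return x @ weight.T

dense = nn.Linear(64, 64, bias=False)
tensorized = KroneckerLinear()
print(sum(p.numel() for p in dense.parameters()))       # 4096
print(sum(p.numel() for p in tensorized.parameters()))  # 128
print(tensorized(torch.randn(3, 64)).shape)             # torch.Size([3, 64])
```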
Three main points: 1. Data Science (DS) will be increasingly important to heliophysics; 2. Methods of heliophysics science discovery will continually evolve, requiring the use of learning technologies [e.g., machine learning (ML)] that are applied rigorously and that are capable of supporting discovery; and 3. To grow with the pace of data, technology, and workforce changes, heliophysics requires a new approach to the representation of knowledge.